To extend the scope of coding queries to more realistic settings, we propose ODEX, the first open-domain execution-based natural language (NL) to code generation dataset. ODEX has 945 NL-Code pairs spanning 79 diverse libraries, along with 1,707 human-written test cases for execution. Our NL-Code pairs are harvested from StackOverflow forums to encourage natural and practical coding queries, and are then carefully rephrased to ensure intent clarity and to prevent potential data memorization. Moreover, ODEX supports intents in four natural languages: English, Spanish, Japanese, and Russian. ODEX unveils intriguing behavioral differences between top-performing Code LMs: Codex performs better on open-domain queries, yet CodeGen strikes a better balance between open- and closed-domain queries. ODEX corroborates the merits of execution-based evaluation over execution-free metrics, but also reveals their complementary effects. Even powerful models such as CodeGen-6B achieve only an 11.96 pass rate at top-1 prediction, suggesting plenty of headroom for improvement. We release ODEX to facilitate research into open-domain problems for the code generation community.
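To make the evaluation protocol concrete, the sketch below executes a model's top-1 completion against its human-written test cases and reports the resulting pass rate; the `problems` format and the per-process sandboxing are illustrative assumptions, not the dataset's actual harness.

```python
# Minimal sketch of execution-based top-1 evaluation in the spirit of ODEX.
# The (code, tests) schema below is hypothetical, not the dataset's schema.
import multiprocessing


def run_candidate(code: str, test: str, queue) -> None:
    """Execute a candidate completion followed by one of its unit tests."""
    try:
        namespace = {}
        exec(code + "\n" + test, namespace)  # test raises AssertionError on failure
        queue.put(True)
    except Exception:
        queue.put(False)


def passes(code: str, test: str, timeout: float = 5.0) -> bool:
    """Run each execution in a separate process with a timeout."""
    queue = multiprocessing.Queue()
    proc = multiprocessing.Process(target=run_candidate, args=(code, test, queue))
    proc.start()
    proc.join(timeout)
    if proc.is_alive():
        proc.terminate()
        return False
    return queue.get() if not queue.empty() else False


def pass_at_1(problems) -> float:
    """problems: list of (top-1 completion, list of test snippets)."""
    solved = sum(all(passes(code, t) for t in tests) for code, tests in problems)
    return solved / len(problems)
```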
We address the general task of structured commonsense reasoning: given a natural language input, the goal is to generate a graph such as an event graph or a reasoning graph. To employ large language models (LMs) for this task, existing approaches ``serialize'' the output graph as a flat list of nodes and edges. Although feasible, these serialized graphs deviate strongly from the natural language corpora that LMs were pre-trained on, hindering LMs from generating them correctly. In this paper, we show that when we instead frame structured commonsense reasoning tasks as code generation tasks, pre-trained LMs of code are better structured commonsense reasoners than LMs of natural language, even when the downstream task does not involve source code at all. We demonstrate our approach across three diverse structured commonsense reasoning tasks. On all these natural language tasks, we show that using our approach, a code generation LM (CODEX) outperforms natural-language LMs that are fine-tuned on the target task (e.g., T5) and other strong LMs such as GPT-3 in the few-shot setting.
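The contrast below illustrates the framing: the same small event graph rendered first as the flat node-and-edge serialization a text LM would have to emit, and then as Python-like code that a code LM can complete more naturally. The class layout is a schematic guess, not the paper's exact prompt format.

```python
# Flat "nodes and edges" serialization that a text LM must generate:
flat = ("nodes: find recipe, buy ingredients, bake cake; "
        "edges: find recipe -> buy ingredients, buy ingredients -> bake cake")

# Code-style framing: the same graph expressed as a Python class,
# whose structure a code LM has seen abundantly during pre-training.
class BakeACake:
    goal = "bake a cake"

    def step0(self):
        return "find recipe"

    def step1(self):
        return "buy ingredients"  # depends on step0

    def step2(self):
        return "bake cake"        # depends on step1
```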
Natural-language-to-code models learn to generate code snippets from natural language (NL) intents. However, with new libraries and functions being introduced every day, it is impossible to cover all APIs of the rapidly growing set of public and proprietary libraries and functions with training examples. Thus, existing models inherently cannot generalize to unseen functions and libraries simply by incorporating them into the training data. In contrast, when human programmers write programs, they frequently refer to textual resources such as code manuals, documentation, and tutorials to explore and understand the available library functionality. Inspired by this observation, we introduce DocCoder: an approach that explicitly leverages code manuals and documentation by (1) retrieving the documentation relevant to a given NL intent, and (2) generating code based on the NL intent and the retrieved documentation. Our approach is general, can be applied to any programming language, and is agnostic to the underlying neural model. We demonstrate that DocCoder consistently improves NL-to-code models: on the new Bash dataset TLDR, DocCoder improves over strong baselines by 11x; on the popular Python CoNaLa benchmark, DocCoder improves over strong baselines by 1.65 BLEU.
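A minimal retrieve-then-generate sketch of this two-step recipe is given below; the TF-IDF retriever, the tiny documentation pool, and the `code_lm_generate` call are stand-ins for whatever retriever and code LM one actually plugs in.

```python
# Retrieve-then-generate sketch in the spirit of DocCoder (components are
# illustrative stand-ins, not the paper's actual retriever or generator).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

DOC_POOL = [
    "os.listdir(path): Return a list containing the names of the entries in the directory.",
    "glob.glob(pattern): Return a list of paths matching a pathname pattern.",
    "shutil.copy(src, dst): Copy the file src to the file or directory dst.",
]

def retrieve(intent: str, k: int = 2) -> list[str]:
    """Rank documentation entries by TF-IDF similarity to the NL intent."""
    vectorizer = TfidfVectorizer().fit(DOC_POOL + [intent])
    doc_vecs = vectorizer.transform(DOC_POOL)
    intent_vec = vectorizer.transform([intent])
    scores = cosine_similarity(intent_vec, doc_vecs)[0]
    top = scores.argsort()[::-1][:k]
    return [DOC_POOL[i] for i in top]

def build_prompt(intent: str) -> str:
    """Condition generation on both the intent and the retrieved docs."""
    docs = "\n".join(retrieve(intent))
    return f"# Documentation:\n{docs}\n# Intent: {intent}\n# Code:\n"

# prompt = build_prompt("list all python files in a directory")
# code = code_lm_generate(prompt)   # hypothetical call to any code LM
```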
Sentence completion (SC) questions present a sentence with one or more blanks to be filled in, together with three to five possible words or phrases as options. SC questions are widely used for students learning English as a Second Language (ESL). In this paper, we present a large-scale SC dataset, \textsc{SC-Ques}, which consists of 292,517 ESL SC questions from real-world standardized English examinations. Furthermore, we build a comprehensive benchmark for automatically solving SC questions by training large-scale pre-trained language models on the proposed \textsc{SC-Ques} dataset. We conduct a detailed analysis of the baseline models' performance, limitations, and trade-offs. The data and our code are available for research purposes: \url{https://github.com/ai4ed/sc-ques}.
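One natural baseline for such questions is to fill each option into the blank and pick the completion with the lowest language-model loss; the sketch below implements that idea with an off-the-shelf causal LM, as an illustration of the kind of benchmark models described rather than the paper's exact setup.

```python
# Hedged sketch of a perplexity-ranking baseline for SC questions.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

def best_option(stem: str, options: list[str]) -> str:
    """stem contains a single blank marked as '___'; return the best option."""
    losses = []
    for opt in options:
        text = stem.replace("___", opt)
        ids = tokenizer(text, return_tensors="pt").input_ids
        with torch.no_grad():
            loss = model(ids, labels=ids).loss.item()  # mean token NLL
        losses.append(loss)
    return options[losses.index(min(losses))]

# best_option("She ___ to school every day.", ["goes", "go", "gone"])
```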
Online dialogic instructions are a set of pedagogical instructions used in real-world online educational settings to motivate students, help them understand learning materials, and build effective study habits. Despite the popularity and advantages of online learning, the education technology and educational data mining communities still lack a large-scale, high-quality, and well-annotated dataset of such teaching instructions for studying computational approaches that automatically detect online dialogic instructions and further improve online teaching effectiveness. Therefore, in this paper, we present a dataset for online dialogic instruction detection, \textsc{DialogID}, which contains 30,431 effective dialogic instructions. These instructions are well annotated into 8 categories. Furthermore, we leverage prevalent pre-trained language models (PLMs) and propose a simple yet effective adversarial training paradigm to improve the quality and generalization of dialogic instruction detection. Extensive experiments show that our approach outperforms a wide range of baseline methods. The data and our code are available for research purposes: \url{https://github.com/ai4ed/dialogid}.
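The adversarial training idea can be realized, for instance, as an FGM-style perturbation of the PLM's word-embedding table during fine-tuning; the step below is a generic sketch under that assumption (the batch is assumed to contain labels so the model returns a loss), not the paper's exact recipe.

```python
# FGM-style adversarial training step on a PLM's input embeddings (a sketch).
import torch

def adversarial_step(model, batch, optimizer, epsilon=1.0):
    emb_layer = model.get_input_embeddings()          # PLM embedding table

    loss = model(**batch).loss                        # clean forward/backward
    loss.backward()

    grad = emb_layer.weight.grad
    backup = emb_layer.weight.data.clone()
    norm = grad.norm()
    if norm != 0:
        emb_layer.weight.data.add_(epsilon * grad / norm)   # perturb embeddings

    adv_loss = model(**batch).loss                    # adversarial forward/backward
    adv_loss.backward()                               # gradients accumulate with clean ones

    emb_layer.weight.data = backup                    # restore embeddings
    optimizer.step()
    optimizer.zero_grad()
```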
We propose a simple yet effective method for recommending high-quality and diverse exercises to students. Our method consists of three key components: (1) a candidate generation module; (2) a diversity-promoting module; and (3) a scope restriction module. The proposed method improves the overall recommendation performance in terms of recall and increases the diversity of the recommended candidates by 0.81% compared with the baselines.
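A schematic of the three-stage pipeline is sketched below; only the ordering of the stages comes from the description above, while the stage internals (unseen-exercise filtering, one-per-topic diversification, curriculum filtering) are placeholder assumptions.

```python
# Schematic three-stage exercise recommendation pipeline (stage internals are placeholders).
def recommend_exercises(student_history, exercise_pool, curriculum, k=10):
    # (1) candidate generation: exercises the student has not attempted yet
    candidates = [e for e in exercise_pool if e["id"] not in student_history]
    # (2) diversity promotion: keep at most one exercise per knowledge point
    seen_topics, diverse = set(), []
    for e in candidates:
        if e["topic"] not in seen_topics:
            seen_topics.add(e["topic"])
            diverse.append(e)
    # (3) scope restriction: drop exercises outside the current curriculum
    in_scope = [e for e in diverse if e["topic"] in curriculum]
    return in_scope[:k]
```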
Knowledge tracing (KT) is the task of using students' historical learning interaction data to model their knowledge mastery so as to predict their performance on future interactions. Recently, remarkable progress has been made in solving the KT problem with various deep learning techniques. However, the success of deep-learning-based knowledge tracing (DLKT) methods remains somewhat mysterious, and proper measurement and analysis of these DLKT methods are still a challenge. First, data preprocessing procedures in existing works are often private and/or customized, which limits experimental standardization. Furthermore, existing DLKT studies often differ in their evaluation protocols and are far away from real-world educational settings. To address these problems, we introduce a comprehensive Python-based benchmark platform, \textsc{pyKT}, to guarantee valid comparisons across DLKT methods via thorough evaluations. The \textsc{pyKT} library consists of a standardized set of data preprocessing procedures on 7 popular datasets from different domains and 10 frequently compared DLKT model implementations for transparent experiments. Results from our fine-grained and rigorous empirical KT studies yield a set of observations and suggestions for effective DLKT; for example, wrong evaluation settings may cause label leakage, which generally leads to inflated performance, and the improvements of many DLKT methods are minimal compared with the very first DLKT model proposed by Piech et al. \cite{piech2015deep}. We have open-sourced \textsc{pyKT} and our experimental results at \url{https://pykt.org/}. We welcome contributions from other research groups and practitioners.
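One of the pitfalls mentioned above, label leakage, disappears when the evaluation only conditions on interactions strictly before the step being predicted; the sketch below illustrates that leakage-free next-step protocol generically (the `model.predict` interface is hypothetical, and this is not pyKT's API).

```python
# Leakage-free next-step knowledge-tracing evaluation (generic illustration).
def evaluate_kt(model, sequences):
    """sequences: list of per-student interaction lists [(question_id, correct), ...];
    model.predict(history, question) is a hypothetical probability-of-correct interface."""
    preds, labels = [], []
    for seq in sequences:
        for t in range(1, len(seq)):
            history = seq[:t]                  # responses up to step t-1 only
            q_t, y_t = seq[t]                  # the response at step t is never shown
            preds.append(model.predict(history, q_t))
            labels.append(y_t)
    return preds, labels
```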
Spoken Question Answering (SQA) is the task of finding the answer to a question from a spoken document, which is essential for personal assistants to respond to users' queries. Existing SQA methods all rely on Automatic Speech Recognition (ASR) transcripts. Not only does ASR require a large amount of annotated data, which is time-consuming and costly to collect for low-resource languages, but more importantly, the answers to questions often contain named entities or out-of-vocabulary words that cannot be recognized correctly. Moreover, ASR aims to minimize recognition errors over all words, including many function words that are irrelevant to the SQA task. Therefore, SQA without ASR transcripts (textless) has always been highly desired, although it is very difficult. This work proposes Discrete Spoken Unit Adaptive Learning (DUAL), which leverages unlabeled data for pre-training and is fine-tuned on the SQA downstream task. The time intervals of the spoken answers can be predicted directly from the spoken documents. We also release a new SQA benchmark corpus, NMSQA, for data with more realistic scenarios. We empirically show that DUAL yields results comparable to those obtained by cascading ASR and text QA models, and is robust to real-world data. Our code and models will be open-sourced.
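The textless formulation predicts an answer span directly over discrete speech units and converts it to a time interval via the unit frame rate; below is a generic sketch of that mapping, where the 20 ms frame duration and the `span_model` interface are illustrative assumptions rather than DUAL's actual architecture.

```python
# Span-over-discrete-units answering mapped back to time (generic sketch).
FRAME_SEC = 0.02   # assume one discrete unit per 20 ms frame

def span_to_time(start_idx: int, end_idx: int) -> tuple[float, float]:
    """Convert a predicted (start, end) unit span into seconds."""
    return start_idx * FRAME_SEC, (end_idx + 1) * FRAME_SEC

def answer_interval(span_model, question_units, passage_units):
    """span_model returns the best (start, end) unit indices over the passage
    conditioned on the question units (hypothetical interface)."""
    start_idx, end_idx = span_model(question_units, passage_units)
    return span_to_time(start_idx, end_idx)

# answer_interval(my_model, q_units, doc_units) -> e.g. (12.34, 15.06) seconds
```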
Benefiting from its ability to exploit intrinsic supervision information, contrastive learning has recently achieved promising performance in the field of deep graph clustering. However, we observe that two drawbacks of the positive and negative sample construction mechanisms limit existing algorithms from further improvement. 1) The quality of positive samples heavily depends on carefully designed data augmentations, while inappropriate data augmentations easily lead to semantic drift and indiscriminative positive samples. 2) The constructed negative samples are unreliable because they ignore important clustering information. To solve these problems, we propose a Cluster-guided Contrastive deep Graph Clustering network (CCGC) that mines the intrinsic supervision information in high-confidence clustering results. Specifically, instead of conducting complex node or edge perturbation, we construct two views of the graph by designing special Siamese encoders whose weights are not shared between the sibling sub-networks. Then, guided by the high-confidence clustering information, we carefully select and construct the positive samples from the same high-confidence cluster in the two views. Moreover, to construct semantically meaningful negative sample pairs, we regard the centers of different high-confidence clusters as negative samples, thus improving the discriminative capability and reliability of the constructed sample pairs. Lastly, we design an objective function that pulls together samples from the same cluster while pushing away those from other clusters, by maximizing and minimizing the cross-view cosine similarity between positive and negative samples, respectively. Extensive experimental results on six datasets demonstrate the effectiveness of CCGC compared with existing state-of-the-art algorithms.
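A simplified reading of this objective is sketched below: cross-view cosine similarity is maximized for each sample's two views within its high-confidence cluster and minimized against the centers of the other clusters. It is an interpretation of the abstract, not the authors' reference implementation.

```python
# Cluster-guided contrastive objective, simplified sketch.
import torch
import torch.nn.functional as F

def ccgc_style_loss(z1, z2, cluster_ids, num_clusters):
    """z1, z2: (n, d) embeddings from the two views; cluster_ids: (n,) LongTensor of
    high-confidence labels (assumes every cluster id appears in the batch)."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)

    # Positive term: cross-view cosine similarity of each sample with itself.
    pos = (z1 * z2).sum(dim=1)                        # (n,)

    # Negative term: similarity to the centers of the *other* high-confidence clusters.
    centers = torch.stack([z2[cluster_ids == c].mean(dim=0) for c in range(num_clusters)])
    centers = F.normalize(centers, dim=1)             # (num_clusters, d)
    sims = z1 @ centers.T                             # (n, num_clusters)
    own_cluster = F.one_hot(cluster_ids, num_clusters).bool()
    neg = sims.masked_fill(own_cluster, 0).sum(dim=1) / (num_clusters - 1)

    return (neg - pos).mean()                         # minimize: push neg down, pull pos up
```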
As one of the prevalent methods for achieving automation systems, Imitation Learning (IL) presents promising performance in a wide range of domains. However, despite the considerable improvement in policy performance, the corresponding research on the explainability of IL models is still limited. Inspired by recent approaches in explainable artificial intelligence, we propose a model-agnostic explanation framework for IL models called R2RISE. R2RISE aims to explain the overall policy performance with respect to the frames in the demonstrations. It iteratively retrains the black-box IL model from randomized masked demonstrations and uses the conventional evaluation outcome, the environment return, as the coefficient to build an importance map. We also conducted experiments to investigate three major questions concerning the equality of frames' importance, the effectiveness of the importance map, and the connections between importance maps from different IL models. The results show that R2RISE successfully distinguishes important frames from the demonstrations.
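A schematic of the RISE-style procedure described above: sample random frame masks, retrain the black-box IL model on each masked demonstration set, and weight every mask by the resulting environment return. The `retrain_il` and `evaluate_return` callables are placeholders for the user's IL pipeline and evaluation environment.

```python
# Importance map over demonstration frames via randomized masking (a sketch).
import numpy as np

def importance_map(demo_frames, retrain_il, evaluate_return,
                   num_masks=100, keep_prob=0.5, seed=0):
    rng = np.random.default_rng(seed)
    n = len(demo_frames)
    scores = np.zeros(n)
    mass = np.zeros(n)
    for _ in range(num_masks):
        mask = rng.random(n) < keep_prob          # which frames survive this round
        policy = retrain_il([f for f, m in zip(demo_frames, mask) if m])
        ret = evaluate_return(policy)             # environment return as the coefficient
        scores += ret * mask
        mass += mask
    return scores / np.maximum(mass, 1)           # per-frame importance estimate
```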